A recent explosion of research focuses on developing methods and tools for building fair predictive models. Most of this work, however, relies on the assumption that the training and testing data are representative of the target population on which the model will be deployed. In practice, real-world training data often suffer from selection bias and are not representative of the target population, for reasons including the cost and feasibility of collecting and labeling data, historical discrimination, and individual biases. In this paper, we introduce a new framework for certifying and ensuring the fairness of predictive models trained on biased data. We draw inspiration from query answering over incomplete and inconsistent databases to present and formalize the problem of consistent range approximation (CRA) of answers to queries about aggregate information for the target population. We leverage background knowledge about the data collection process, the biased data, and limited or no auxiliary data sources to compute a range of answers for aggregate queries over the target population that are consistent with the available information. We then develop methods that use CRA of such aggregate queries to build predictive models that are certifiably fair on the target population, even when no external information about that population is available during training. We evaluate our methods on real data and demonstrate improvements over the state of the art. Significantly, we show that enforcing fairness using our methods can lead to predictive models that are not only fair, but also more accurate on the target population.
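The idea of a consistent range can be illustrated with a toy calculation (a sketch under assumed conditions, not the paper's actual CRA algorithm; the query, the bound structure, and all names here are hypothetical). Suppose each demographic group was sampled with an unknown selection probability known only to lie in an interval; then an aggregate such as the population-level positive rate is confined to a computable range:

```python
from itertools import product

def positive_rate_range(groups):
    """Range of the population-level positive rate consistent with
    per-group selection-probability bounds (illustrative toy model).

    groups: list of (n_sampled, n_positive, p_lo, p_hi) per group,
    where the unknown selection probability of each group lies in
    [p_lo, p_hi].  The implied population size of a group is then in
    [n_sampled / p_hi, n_sampled / p_lo].
    """
    rates = [pos / n for n, pos, _, _ in groups]
    size_ranges = [(n / hi, n / lo) for n, _, lo, hi in groups]
    # The weighted average sum(r_g * N_g) / sum(N_g) attains its
    # extrema at a vertex of the box of feasible sizes: enumerate them.
    candidates = []
    for sizes in product(*size_ranges):
        total = sum(sizes)
        candidates.append(sum(r * s for r, s in zip(rates, sizes)) / total)
    return min(candidates), max(candidates)

# Two groups: A has 200 samples (40% positive), B has 100 (70% positive),
# each sampled with a probability known only to lie in [0.25, 0.5].
lo, hi = positive_rate_range([(200, 80, 0.25, 0.5), (100, 70, 0.25, 0.5)])
```

Any training procedure that must certify fairness on the target population can then be required to satisfy the fairness constraint across the whole interval rather than at the biased point estimate.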
Person re-identification is a challenging task because of the high intra-class variance induced by unrestricted nuisance factors of variation such as pose, illumination, viewpoint, background, and sensor noise. Recent approaches postulate that powerful architectures have the capacity to learn feature representations invariant to nuisance factors by training them with losses that minimize intra-class variance and maximize inter-class separation, without modeling nuisance factors explicitly. The dominant approaches use either a discriminative loss with a margin, like the softmax loss with an additive angular margin, or a metric learning loss, like the triplet loss with batch-hard mining of triplets. Since the softmax imposes feature normalization, it limits the gradient flow supervising the feature embedding. We address this by joining the losses and leveraging the triplet loss as a proxy for the missing gradients. We further improve invariance to nuisance factors by adding the discriminative task of predicting attributes. Our extensive evaluation highlights that when only a holistic representation is learned, we consistently outperform the state of the art on the three most challenging datasets. Such representations are easier to deploy in practical systems. Finally, we find that joining the losses removes the requirement for a margin in the softmax loss while increasing performance.
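As an illustration of the metric-learning side mentioned above, here is a minimal NumPy sketch of the triplet loss with batch-hard mining (the generic formulation, not the paper's exact implementation; the function and parameter names are ours):

```python
import numpy as np

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """Triplet loss with batch-hard mining (illustrative sketch):
    for each anchor, pick its hardest positive (the farthest same-label
    sample) and hardest negative (the closest different-label sample),
    and penalize violations of the margin between the two distances."""
    # Pairwise Euclidean distances between all embeddings in the batch.
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)
    same = labels[:, None] == labels[None, :]
    n = len(labels)
    losses = []
    for i in range(n):
        pos = dist[i][same[i] & (np.arange(n) != i)]   # same label, not self
        neg = dist[i][~same[i]]                        # different label
        if len(pos) == 0 or len(neg) == 0:
            continue  # anchor has no valid triplet in this batch
        losses.append(max(0.0, pos.max() - neg.min() + margin))
    return float(np.mean(losses))
```

In the joint scheme the abstract describes, a term like this would be added to the margin-softmax classification loss so the embedding still receives gradients when the softmax path saturates.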
Knowledge distillation (KD) has gained a lot of attention in the field of model compression for edge devices thanks to its effectiveness in compressing large powerful networks into smaller lower-capacity models. Online distillation, in which both the teacher and the student are learning collaboratively, has also gained much interest due to its ability to improve the performance of the networks involved. The Kullback-Leibler (KL) divergence ensures the proper knowledge transfer between the teacher and student. However, most online KD techniques present some bottlenecks under the network capacity gap: when the models are trained cooperatively and simultaneously, the KL distance becomes incapable of properly minimizing the gap between the teacher's and student's distributions. Alongside accuracy, critical edge device applications need well-calibrated compact networks, and confidence calibration provides a sensible way of obtaining trustworthy predictions. We propose BD-KD: Balancing of Divergences for online Knowledge Distillation. We show that adaptively balancing between the reverse and forward divergences shifts the focus of the training strategy to the compact student network without limiting the teacher network's learning process. We demonstrate that, by performing this balancing design at the level of the student distillation loss, we improve both the accuracy and the calibration of the compact student network. We conducted extensive experiments using a variety of network architectures and show improvements on multiple datasets including CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet. We illustrate the effectiveness of our approach through comprehensive comparisons and ablations with current state-of-the-art online and offline KD techniques.
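The balancing idea can be sketched as follows, assuming a fixed mixing weight in place of BD-KD's adaptive scheme (the function names and the constant `alpha` are illustrative, not the paper's):

```python
import numpy as np

def softmax(z):
    z = z - z.max(-1, keepdims=True)  # stabilize before exponentiation
    e = np.exp(z)
    return e / e.sum(-1, keepdims=True)

def kl(p, q):
    """KL(p || q) for two discrete distributions."""
    return float((p * np.log(p / q)).sum())

def balanced_distillation_loss(student_logits, teacher_logits, alpha=0.5):
    """Sketch of balancing forward and reverse KL between the teacher
    and student distributions; BD-KD's adaptive weighting is not
    reproduced here, a fixed alpha stands in for it."""
    p_t, p_s = softmax(teacher_logits), softmax(student_logits)
    forward = kl(p_t, p_s)   # mode-covering: student spreads over the teacher
    reverse = kl(p_s, p_t)   # mode-seeking: student concentrates on modes
    return alpha * forward + (1.0 - alpha) * reverse
```

The forward term is the usual distillation objective; mixing in the reverse term changes which errors the student is penalized for most, which is what shifts the training focus toward the lower-capacity network.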
Commercial ML APIs offered by providers such as Google, Amazon, and Microsoft have greatly simplified the adoption of ML in many applications. Numerous companies and academics pay to use ML APIs for tasks such as object detection, OCR, and sentiment analysis. Different ML APIs tackling the same task can have very heterogeneous performance. Moreover, the models underlying the APIs also evolve over time. As ML APIs rapidly become a valuable marketplace and a widespread way of consuming machine learning, it is critical to systematically study and compare different APIs and to characterize how individual APIs change over time. However, this topic is currently underexplored due to the lack of data. In this paper, we present HAPI (History of APIs), a dataset of 1,761,417 instances of commercial ML API applications (involving APIs from Amazon, Google, IBM, Microsoft, and other providers), covering image tagging, text recognition, and text mining, collected from 2020 to 2022. Each instance consists of a query input to an API (e.g., an image or text) along with the API's output prediction/annotation and confidence scores. HAPI is the first large-scale dataset of ML API usage and a unique resource for studying ML-as-a-service (MLaaS). As examples of the types of analysis that HAPI enables, we show that ML API performance changes substantially over time; the accuracy of several APIs dropped on specific benchmark datasets. Even when an API's aggregate performance remains stable, its error modes can shift across different data subtypes between 2020 and 2022. Such changes can significantly affect entire analytics pipelines that use an ML API as a component. We further use HAPI to study disparities in commercial API performance across demographic subgroups over time. HAPI can stimulate more research in the growing field of MLaaS.
Two leading causes of death in the United States and worldwide are stroke and myocardial infarction. The underlying cause of both is the release of unstable atherosclerotic plaques through rupture or erosion; these plaques block vessels of the heart (myocardial infarction) or brain (stroke). Clinical studies show that, in plaque rupture or erosion events, plaque composition matters more than lesion size. To determine plaque composition, cells of various types are counted in 3D cardiovascular immunofluorescence images of plaque lesions. However, counting these cells manually is expensive, time-consuming, and prone to human error. These challenges of manual counting motivate the need for automated methods to localize and count the cells in images. The purpose of this study is to develop an automated approach that accurately detects and counts cells in 3D immunofluorescence images with minimal annotation effort. In this study, we use a weakly supervised learning approach that trains the HoVer-Net segmentation model with point annotations to detect nuclei in fluorescence images. The advantage of point annotations is that they require much less effort than pixel-wise annotations. To train the HoVer-Net model with point annotations, we adopt a commonly used cluster-labeling method that converts point annotations into accurate binary masks of cell nuclei. Traditionally, these methods generate a binary mask from the point annotations but leave the region around each object unlabeled (and this region is typically ignored during model training). However, these regions may contain important information that helps determine the boundaries between cells. Therefore, we apply an entropy-minimization loss function in these regions to encourage the model to output more confident predictions on the unlabeled areas. Our comparative study shows that, using our weakly trained HoVer-Net model...
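The entropy-minimization idea can be sketched in a few lines (an illustrative formulation under our own naming; the paper's exact loss weighting is not reproduced):

```python
import numpy as np

def entropy_minimization_loss(probs, unlabeled_mask):
    """Mean per-pixel Shannon entropy over the unlabeled region around
    each nucleus.  Penalizing entropy there pushes the model toward
    confident predictions in the areas the cluster labels left
    unannotated, instead of ignoring them.

    probs: (H, W, C) softmax probabilities; unlabeled_mask: (H, W) bool.
    """
    eps = 1e-12  # avoid log(0)
    entropy = -(probs * np.log(probs + eps)).sum(-1)  # (H, W)
    return float(entropy[unlabeled_mask].mean())
```

In training, this term would be added to the usual supervised segmentation loss computed on the labeled foreground and background pixels.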
Machine learning (ML) research has often focused on models, while the most prominent datasets have been employed for everyday ML tasks without regard for the breadth, difficulty, and faithfulness of these datasets to the underlying problem. Neglecting the fundamental importance of datasets has caused significant problems, involving data cascades in real-world applications and saturation of dataset-driven criteria for model quality, hindering the growth of research. To address this problem, we present DataPerf, a benchmark package for evaluating ML datasets and dataset-centric algorithms. We intend to enable a "data ratchet," in which training sets will aid in the evaluation of test sets for the same problem, and vice versa. Such a feedback-driven strategy will generate a virtuous cycle that will accelerate data-centric AI. The MLCommons Association will maintain DataPerf.
In this paper, we study a class of bilevel optimization problems, also known as simple bilevel optimization, in which we minimize a smooth objective function over the optimal solution set of another convex constrained optimization problem. Several iterative methods have been developed for solving this class of problems. Alas, their convergence guarantees are not satisfactory, as they are either asymptotic or have slow and suboptimal rates of convergence. To address this issue, in this paper we introduce a generalization of the Frank-Wolfe (FW) method to solve the considered problem. The main idea of our method is to locally approximate the solution set of the lower-level problem via a cutting plane, and then run an FW-type update to decrease the upper-level objective. When the upper-level objective is convex, we show that our method requires $\mathcal{O}(\max\{1/\epsilon_f, 1/\epsilon_g\})$ iterations to find a solution that is $\epsilon_f$-optimal for the upper-level objective and $\epsilon_g$-optimal for the lower-level objective. Moreover, when the upper-level objective is non-convex, our method requires $\mathcal{O}(\max\{1/\epsilon_f^2, 1/(\epsilon_f\epsilon_g)\})$ iterations to find an $(\epsilon_f, \epsilon_g)$-optimal solution. We further prove stronger convergence guarantees under the Hölderian error bound assumption on the lower-level problem. To the best of our knowledge, our method achieves the best-known iteration complexity for the considered bilevel problem. We also present numerical experiments showcasing the superior performance of our method compared with state-of-the-art methods.
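For intuition, here is a minimal sketch of the classic FW update on the probability simplex, the building block the method generalizes (the cutting-plane restriction of the feasible set is not reproduced; the step size and the example problem are illustrative):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=1000):
    """Classic Frank-Wolfe on the probability simplex.  The linear
    minimization oracle over the simplex simply picks the vertex
    (coordinate) with the smallest partial derivative; the iterate is
    then a convex combination of the current point and that vertex."""
    x = x0.copy()
    for k in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0          # LMO: best vertex of the simplex
        gamma = 2.0 / (k + 2.0)        # standard open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x

# Minimize ||x - c||^2 over the simplex for an interior target c.
c = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: 2 * (x - c), np.array([1.0, 0.0, 0.0]))
```

The bilevel method replaces the fixed simplex with a cutting-plane approximation of the lower-level solution set that is refreshed at every iteration, which is where the joint $(\epsilon_f, \epsilon_g)$ guarantees come from.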
Unmanned aerial vehicles (UAVs) are among the technological breakthroughs supporting a variety of services, including communications, and UAVs will play a critical role in enhancing the physical layer security of wireless networks. This paper defines the problem of eavesdropping on the link between a ground user and a UAV that serves as an aerial base station (ABS). The reinforcement learning algorithms Q-learning and deep Q-network (DQN) are proposed for optimizing the position and transmission power of the ABS to enhance the data rate of the ground user. This increases the secrecy capacity without the system knowing the eavesdropper's location. Simulation results show the fast convergence of the proposed DQN and the highest secrecy capacity compared with Q-learning and baseline approaches.
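A toy tabular Q-learning sketch conveys the setup (the secrecy-rate values, state space, and hyperparameters below are invented for illustration; the paper's channel model and DQN are not reproduced):

```python
import random

# Toy secrecy-rate landscape over 5 candidate ABS positions
# (hypothetical numbers standing in for user rate minus eavesdropper rate).
SECRECY = [0.1, 0.4, 0.9, 0.5, 0.2]
ACTIONS = [-1, 0, 1]  # move left, stay, move right

def step(pos, a):
    new = min(max(pos + ACTIONS[a], 0), len(SECRECY) - 1)
    return new, SECRECY[new]

def q_learning(steps=5000, lr=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning for placing the ABS: states are positions,
    actions are moves, reward is the toy secrecy rate at the new
    position.  Uses epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = [[0.0] * len(ACTIONS) for _ in SECRECY]
    pos = 0
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[pos][i])
        new, r = step(pos, a)
        # Standard Q-learning temporal-difference update
        Q[pos][a] += lr * (r + gamma * max(Q[new]) - Q[pos][a])
        pos = new
    return Q
```

A DQN would replace the table `Q` with a neural network over continuous positions and powers, which is what makes the approach scale to the actual ABS placement problem.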
Control strategies for active prostheses or orthoses use sensor inputs to recognize the user's locomotor intention and generate corresponding control commands for producing the desired motion. In this paper, we propose a learning-based shared model for predicting ankle-joint motion across different locomotion modes, such as level-ground walking, stair ascent, stair descent, slope ascent, and slope descent, without the need to classify between them. Features extracted from hip- and knee-joint angular motion are used to continuously predict ankle angles and moments with a shared model based on a feedforward neural network. We show that the shared model is adequate for predicting the ankle angles and moments of different locomotion modes without explicitly classifying between the modes. The proposed strategy shows the potential for devising a high-level controller for an intelligent prosthetic ankle that can adapt to different locomotion modes.
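A minimal stand-in for such a shared model, assuming synthetic hip/knee features and a one-hidden-layer network trained by plain gradient descent (the architecture, sizes, and all names are illustrative, not those of the paper):

```python
import numpy as np

def train_shared_model(X, y, hidden=16, lr=0.01, epochs=2000, seed=0):
    """One-hidden-layer feedforward regressor: maps hip/knee features
    to ankle-angle and ankle-moment targets, trained with full-batch
    gradient descent on mean squared error.  Returns a predict()
    closure over the learned weights."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, y.shape[1])); b2 = np.zeros(y.shape[1])
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        err = h @ W2 + b2 - y                  # (N, outputs)
        # Backpropagation for the MSE loss
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    def predict(Xn):
        return np.tanh(Xn @ W1 + b1) @ W2 + b2
    return predict
```

Because one set of weights serves every locomotion mode, no mode classifier sits in front of the regressor; the mode-specific behavior has to be carried by the input features themselves.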
Colleges and universities use predictive analytics in a variety of ways to increase student success rates. Despite the potential of predictive analytics, two major barriers exist to their adoption in higher education: (a) the lack of democratization in deployment, and (b) the potential to exacerbate inequalities. Education researchers and policymakers encounter numerous challenges in deploying predictive modeling in practice. These challenges arise at different steps of modeling, including data preparation, model development, and evaluation, and each of these steps can introduce additional bias to the system if not appropriately performed. Most large-scale and nationally representative education data sets suffer from a significant number of incomplete responses from the research participants. While many education-related studies have addressed the challenges of missing data, little is known about the impact of handling missing values on the fairness of predictive outcomes in practice. In this paper, we set out to first assess the disparities in predictive modeling outcomes for college-student success, then investigate the impact of imputation techniques on model performance and fairness using a commonly used set of metrics. We conduct a prospective evaluation to provide a less biased estimation of future performance and fairness than an evaluation of historical data. Our comprehensive analysis of a real large-scale education dataset reveals key insights on modeling disparities and on how imputation techniques impact the fairness of student-success predictions under different testing scenarios. Our results indicate that imputation introduces bias if the testing set follows the historical distribution. However, if the injustice in society is addressed and, consequently, the upcoming batch of observations is equalized, the model would be less biased.
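Two of the ingredients above, an imputation technique and a fairness metric, can be sketched directly (mean imputation and demographic parity are common choices; the paper's full set of techniques and metrics is broader, and the names here are ours):

```python
import numpy as np

def mean_impute(X):
    """Column-mean imputation: replace NaNs in each feature with that
    feature's mean over the observed values."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = np.take(col_means, cols)
    return X

def demographic_parity_difference(y_pred, group):
    """|P(pred = 1 | group = 0) - P(pred = 1 | group = 1)|, a common
    fairness metric for a binary predicted outcome such as
    student success."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])
```

Comparing this metric on models trained with different imputation strategies, before and after imputation shifts the feature distribution, is the kind of analysis the abstract describes.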